
    HeAT – a Distributed and GPU-accelerated Tensor Framework for Data Analytics

    In order to cope with the exponential growth in available data, the efficiency of data analysis and machine learning libraries has recently received increased attention. Although the corresponding array-based numerical kernels have been significantly improved, most are limited by the resources available on a single computational node. Consequently, kernels must exploit distributed resources, e.g., distributed memory architectures. To this end, we introduce HeAT, an array-based numerical programming framework for large-scale parallel processing with an easy-to-use NumPy-like API. HeAT utilizes PyTorch as a node-local eager execution engine and distributes the workload via MPI on arbitrarily large high-performance computing systems. It provides both low-level array-based computations and assorted higher-level algorithms. With HeAT, a NumPy user can take advantage of their available resources, significantly lowering the barrier to distributed data analysis. Compared with applications written in similar frameworks, HeAT achieves speedups of up to two orders of magnitude.
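
    As a quick illustration of the NumPy-like, MPI-distributed API this abstract describes, here is a minimal sketch; it assumes the `heat` package is installed, and the function names follow Heat's public documentation rather than the abstract itself:

```python
# Minimal sketch of Heat's NumPy-like API (assumes `pip install heat`).
import heat as ht

# Create a distributed array: split=0 partitions the rows across all
# MPI processes the script was launched with.
x = ht.arange(1_000_000, split=0, dtype=ht.float32)

# Element-wise operations run process-locally on the PyTorch engine;
# reductions trigger the necessary MPI communication under the hood.
y = ht.sqrt(x)
total = y.sum()

print(total)
```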

    HeAT -- a Distributed and GPU-accelerated Tensor Framework for Data Analytics

    To cope with the rapid growth in available data, the efficiency of data analysis and machine learning libraries has recently received increased attention. Although great advancements have been made in traditional array-based computations, most are limited by the resources available on a single computation node. Consequently, novel approaches must be taken to exploit distributed resources, e.g., distributed memory architectures. To this end, we introduce HeAT, an array-based numerical programming framework for large-scale parallel processing with an easy-to-use NumPy-like API. HeAT utilizes PyTorch as a node-local eager execution engine and distributes the workload on arbitrarily large high-performance computing systems via MPI. It provides both low-level array computations and assorted higher-level algorithms. With HeAT, it is possible for a NumPy user to take full advantage of their available resources, significantly lowering the barrier to distributed data analysis. When compared to similar frameworks, HeAT achieves speedups of up to two orders of magnitude. Comment: 10 pages, 8 figures, 5 listings, 1 table.

    The Helmholtz Analytics Toolkit (Heat) and its role in the landscape of massively-parallel scientific Python

    When it comes to enhancing the exploitation of massive data, machine learning methods are at the forefront of researchers' awareness. Much less so is the need for, and the complexity of, applying these techniques efficiently across large-scale, memory-distributed data volumes. In fact, these aspects, typical for the handling of massive data sets, pose major challenges to the vast majority of research communities, in particular to those without a background in high-performance computing. Often, the standard approach involves breaking up and analyzing data in smaller chunks; this can be inefficient and error-prone, and sometimes it may be altogether inappropriate because the context of the overall data set is lost. The Helmholtz Analytics Toolkit (Heat) library offers a solution to this problem by providing memory-distributed and hardware-accelerated array manipulation, data analytics, and machine learning algorithms in Python. The main objective is to make memory-intensive data analysis possible across various fields of research, in particular for domain scientists who are not experts in traditional high-performance computing but nevertheless need to tackle data analytics problems going beyond the capabilities of a single workstation. The development of this interdisciplinary, general-purpose, and open-source scientific Python library started in 2018 and is based on a collaboration of three institutions of the Helmholtz Association: the German Aerospace Center (DLR), Forschungszentrum Jülich (FZJ), and the Karlsruhe Institute of Technology (KIT). The pillars of its development are:
    - to enable memory distribution of n-dimensional arrays,
    - to adopt PyTorch as the process-local compute engine (hence supporting GPU acceleration),
    - to provide memory-distributed (i.e., multi-node, multi-GPU) array operations and algorithms, optimizing asynchronous MPI communication (based on mpi4py) under the hood, and
    - to wrap these functionalities in a NumPy- or scikit-learn-like API (see the sketch below), so that existing applications can be ported with minimal changes and used by non-experts in HPC.
    In this talk we will give an illustrative overview of the current features and capabilities of our library. Moreover, we will discuss its role in the existing ecosystem of distributed computing in Python, and we will address technical and operational challenges in further development.
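
    The scikit-learn-like side of the API referenced in the last pillar might look roughly as follows. This is a hedged sketch: the class and parameter names are assumptions drawn from Heat's documentation, not from this abstract.

```python
# Sketch of Heat's scikit-learn-like estimator interface (names assumed
# from Heat's docs): distributed k-means on a memory-distributed array.
import heat as ht

# Distributed random data: 10,000 samples, 3 features, rows split
# across all participating MPI processes.
data = ht.random.randn(10000, 3, split=0)

# KMeans mirrors the scikit-learn estimator API, but fit() operates on
# the distributed array, communicating via MPI where needed.
kmeans = ht.cluster.KMeans(n_clusters=4)
kmeans.fit(data)

print(kmeans.cluster_centers_)
```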

    The missing link between massive data and AI: parallel computing with Heat

    When it comes to enhancing the exploitation of massive data, machine learning and AI methods are very much at the forefront of our awareness. Much less so is the need for, and complexity of, applying these techniques efficiently across memory-distributed data volumes. Heat [1, 2] is an open-source Python library for high-performance data analytics, machine learning, and deep learning. It provides highly optimized algorithms and data structures for tensor computations using CPUs, GPUs, and distributed cluster systems. Heat's NumPy-like API makes writing scalable, GPU-accelerated applications straightforward; at the same time, parallelism implemented under the hood via MPI provides a significant improvement in efficiency and performance with respect to, e.g., Dask. Born out of a large-scale collaboration in applied sciences, Heat also acts as a platform for collaboration and knowledge transfer within data-intensive science. In this presentation, I will show you the inner workings of the library, tell you about our collaborations with the astrophysics and space science community (massively parallel signal-processing capabilities for the SKA-MPG telescope, among others), and hopefully gain from you some insight into how to best support data-intensive astro operations going forward.   References: [1] Götz, M., Debus, C., Coquelin, D., et al.: 'HeAT - a Distributed and GPU-accelerated Tensor Framework for Data Analytics'; [2] https://github.com/helmholtz-analytics/heat
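
    To make the "parallelism under the hood" claim concrete, here is a hedged sketch of how such a script is typically run. The script contains no explicit MPI calls; the launch command and the file name `signal_stats.py` are illustrative assumptions, not taken from the abstract.

```python
# signal_stats.py (hypothetical file name) -- no explicit MPI code here.
# Launching it with, e.g.,
#     mpirun -n 4 python signal_stats.py
# distributes both the data and the computation across four processes.
import heat as ht

# Each process holds roughly one quarter of the array (split=0).
signal = ht.random.rand(4_000_000, split=0)

# Mean and standard deviation are global results: Heat performs the
# required reductions with MPI (via mpi4py) behind the NumPy-like calls.
print(signal.mean(), signal.std())
```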
